Instructions

This tutorial will let you reproduce, in R, the plots that we are going to create at the workshop.
Please read carefully and follow the steps. Wherever you see the Code icon on the right, you can click on it to reveal the actual code used in that section.

Introduction

This tutorial will focus on analysing up-to-date data on the worldwide novel coronavirus (COVID-19) pandemic.
There are several data sources available online. We will use the data collected from a range of official sources and hosted on the Our World in Data website (Mathieu et al. 2021).

Analyse Data in R

To run R and RStudio on Binder, click on this badge: Launch RStudio Binder.

Start RStudio and create a new project named Workshop3 in a new folder (if you need a reminder how to do it, check out the Workshop1 Tutorial on BB).
Once RStudio restarts inside the project’s folder, create a new R script named Workshop3.R and two new folders: one named data for our input data and another named output for our plots.

Install Extra Packages

For this analysis we will again use some packages from the Tidyverse, but this time we will load only the specific packages we need (which should be pre-installed on your computers) to avoid downloading the entire tidyverse. In addition to the Tidyverse packages we got to know in the previous workshop, we will use the plotly package to create interactive plots, paletteer for custom colour palettes, readxl to read MS-Excel files, scales to format large numbers, lubridate to better handle dates, glue to paste strings together, patchwork to combine several plots into a single figure, and a few others to assist in getting the data into shape.
To install these packages, we will introduce a package called pacman that loads the required packages and installs any that are not already installed. To install pacman itself we use the install.packages('pacman') command; note that the package name needs to be quoted and that we only need to run this once (or whenever we want to update the package). Once pacman is installed, we can load it with the library(pacman) command and then load/install all the other packages at once with the p_load() function.

# install required packages - needed only once! (comment with a # after first use)
install.packages('pacman')
# load required packages
library(pacman)
p_load(dplyr, tidyr, ggplot2, readr,  paletteer, glue, scales, plotly, lubridate, patchwork, visdat)

More information on installing and using R packages can be found in this tutorial.

Read Data

Now that we’ve got RStudio up and running and our packages installed and loaded, we can read data into R from our local computer or from web locations using dedicated functions specific to the file type (.csv, .txt, .xlsx, etc.).

We will use the read_csv() command/function from the readr package (part of the tidyverse) to load the data directly from a file on the Our World in Data website into a variable of type data frame (table). If we don’t want to use external packages, we can use the read.csv() function from base R, which won’t automatically parse columns containing dates, and in previous versions of R (< 4.0) will slightly change the structure of the resulting data frame (all text columns will be converted into factors).
> Note that in this case, we need to specify the column types because the data contains a lot of missing values that interfere with the automatic parsing of the column types.

# read data from Our World in Data website
covid_data <- read_csv("https://covid.ourworldindata.org/data/owid-covid-data.csv", 
                       col_types = paste(c("c", "f", "c", "D", rep("d", 29), 
                                           "c", rep("d", 26)), collapse = ""))

Data Exploration

Let’s use built-in functions for a brief data exploration, such as head() to show the first six rows of the data and str() to show the type of data in each column:

# explore the data frame
head(covid_data) # show the first 6 rows and the type of each column
## # A tibble: 6 x 60
##   iso_code continent location    date       total_cases new_cases new_cases_smoot~
##   <chr>    <fct>     <chr>       <date>           <dbl>     <dbl>            <dbl>
## 1 AFG      Asia      Afghanistan 2020-02-24           1         1           NA    
## 2 AFG      Asia      Afghanistan 2020-02-25           1         0           NA    
## 3 AFG      Asia      Afghanistan 2020-02-26           1         0           NA    
## 4 AFG      Asia      Afghanistan 2020-02-27           1         0           NA    
## 5 AFG      Asia      Afghanistan 2020-02-28           1         0           NA    
## 6 AFG      Asia      Afghanistan 2020-02-29           1         0            0.143
## # ... with 53 more variables: total_deaths <dbl>, new_deaths <dbl>,
## #   new_deaths_smoothed <dbl>, total_cases_per_million <dbl>,
## #   new_cases_per_million <dbl>, new_cases_smoothed_per_million <dbl>,
## #   total_deaths_per_million <dbl>, new_deaths_per_million <dbl>,
## #   new_deaths_smoothed_per_million <dbl>, reproduction_rate <dbl>,
## #   icu_patients <dbl>, icu_patients_per_million <dbl>, hosp_patients <dbl>,
## #   hosp_patients_per_million <dbl>, weekly_icu_admissions <dbl>, ...
str(covid_data) # show data structure
## spec_tbl_df [106,356 x 60] (S3: spec_tbl_df/tbl_df/tbl/data.frame)
##  $ iso_code                             : chr [1:106356] "AFG" "AFG" "AFG" "AFG" ...
##  $ continent                            : Factor w/ 6 levels "Asia","Europe",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ location                             : chr [1:106356] "Afghanistan" "Afghanistan" "Afghanistan" "Afghanistan" ...
##  $ date                                 : Date[1:106356], format: "2020-02-24" "2020-02-25" ...
##  $ total_cases                          : num [1:106356] 1 1 1 1 1 1 1 1 2 4 ...
##  $ new_cases                            : num [1:106356] 1 0 0 0 0 0 0 0 1 2 ...
##  $ new_cases_smoothed                   : num [1:106356] NA NA NA NA NA 0.143 0.143 0 0.143 0.429 ...
##  $ total_deaths                         : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ new_deaths                           : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ new_deaths_smoothed                  : num [1:106356] NA NA NA NA NA 0 0 0 0 0 ...
##  $ total_cases_per_million              : num [1:106356] 0.026 0.026 0.026 0.026 0.026 0.026 0.026 0.026 0.051 0.103 ...
##  $ new_cases_per_million                : num [1:106356] 0.026 0 0 0 0 0 0 0 0.026 0.051 ...
##  $ new_cases_smoothed_per_million       : num [1:106356] NA NA NA NA NA 0.004 0.004 0 0.004 0.011 ...
##  $ total_deaths_per_million             : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ new_deaths_per_million               : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ new_deaths_smoothed_per_million      : num [1:106356] NA NA NA NA NA 0 0 0 0 0 ...
##  $ reproduction_rate                    : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ icu_patients                         : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ icu_patients_per_million             : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ hosp_patients                        : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ hosp_patients_per_million            : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ weekly_icu_admissions                : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ weekly_icu_admissions_per_million    : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ weekly_hosp_admissions               : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ weekly_hosp_admissions_per_million   : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ new_tests                            : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ total_tests                          : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ total_tests_per_thousand             : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ new_tests_per_thousand               : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ new_tests_smoothed                   : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ new_tests_smoothed_per_thousand      : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ positive_rate                        : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ tests_per_case                       : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ tests_units                          : chr [1:106356] NA NA NA NA ...
##  $ total_vaccinations                   : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ people_vaccinated                    : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ people_fully_vaccinated              : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ new_vaccinations                     : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ new_vaccinations_smoothed            : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ total_vaccinations_per_hundred       : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ people_vaccinated_per_hundred        : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ people_fully_vaccinated_per_hundred  : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ new_vaccinations_smoothed_per_million: num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ stringency_index                     : num [1:106356] 8.33 8.33 8.33 8.33 8.33 ...
##  $ population                           : num [1:106356] 38928341 38928341 38928341 38928341 38928341 ...
##  $ population_density                   : num [1:106356] 54.4 54.4 54.4 54.4 54.4 ...
##  $ median_age                           : num [1:106356] 18.6 18.6 18.6 18.6 18.6 18.6 18.6 18.6 18.6 18.6 ...
##  $ aged_65_older                        : num [1:106356] 2.58 2.58 2.58 2.58 2.58 ...
##  $ aged_70_older                        : num [1:106356] 1.34 1.34 1.34 1.34 1.34 ...
##  $ gdp_per_capita                       : num [1:106356] 1804 1804 1804 1804 1804 ...
##  $ extreme_poverty                      : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ cardiovasc_death_rate                : num [1:106356] 597 597 597 597 597 ...
##  $ diabetes_prevalence                  : num [1:106356] 9.59 9.59 9.59 9.59 9.59 9.59 9.59 9.59 9.59 9.59 ...
##  $ female_smokers                       : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ male_smokers                         : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  $ handwashing_facilities               : num [1:106356] 37.7 37.7 37.7 37.7 37.7 ...
##  $ hospital_beds_per_thousand           : num [1:106356] 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 ...
##  $ life_expectancy                      : num [1:106356] 64.8 64.8 64.8 64.8 64.8 ...
##  $ human_development_index              : num [1:106356] 0.511 0.511 0.511 0.511 0.511 0.511 0.511 0.511 0.511 0.511 ...
##  $ excess_mortality                     : num [1:106356] NA NA NA NA NA NA NA NA NA NA ...
##  - attr(*, "spec")=
##   .. cols(
##   ..   iso_code = col_character(),
##   ..   continent = col_factor(levels = NULL, ordered = FALSE, include_na = FALSE),
##   ..   location = col_character(),
##   ..   date = col_date(format = ""),
##   ..   total_cases = col_double(),
##   ..   new_cases = col_double(),
##   ..   new_cases_smoothed = col_double(),
##   ..   total_deaths = col_double(),
##   ..   new_deaths = col_double(),
##   ..   new_deaths_smoothed = col_double(),
##   ..   total_cases_per_million = col_double(),
##   ..   new_cases_per_million = col_double(),
##   ..   new_cases_smoothed_per_million = col_double(),
##   ..   total_deaths_per_million = col_double(),
##   ..   new_deaths_per_million = col_double(),
##   ..   new_deaths_smoothed_per_million = col_double(),
##   ..   reproduction_rate = col_double(),
##   ..   icu_patients = col_double(),
##   ..   icu_patients_per_million = col_double(),
##   ..   hosp_patients = col_double(),
##   ..   hosp_patients_per_million = col_double(),
##   ..   weekly_icu_admissions = col_double(),
##   ..   weekly_icu_admissions_per_million = col_double(),
##   ..   weekly_hosp_admissions = col_double(),
##   ..   weekly_hosp_admissions_per_million = col_double(),
##   ..   new_tests = col_double(),
##   ..   total_tests = col_double(),
##   ..   total_tests_per_thousand = col_double(),
##   ..   new_tests_per_thousand = col_double(),
##   ..   new_tests_smoothed = col_double(),
##   ..   new_tests_smoothed_per_thousand = col_double(),
##   ..   positive_rate = col_double(),
##   ..   tests_per_case = col_double(),
##   ..   tests_units = col_character(),
##   ..   total_vaccinations = col_double(),
##   ..   people_vaccinated = col_double(),
##   ..   people_fully_vaccinated = col_double(),
##   ..   new_vaccinations = col_double(),
##   ..   new_vaccinations_smoothed = col_double(),
##   ..   total_vaccinations_per_hundred = col_double(),
##   ..   people_vaccinated_per_hundred = col_double(),
##   ..   people_fully_vaccinated_per_hundred = col_double(),
##   ..   new_vaccinations_smoothed_per_million = col_double(),
##   ..   stringency_index = col_double(),
##   ..   population = col_double(),
##   ..   population_density = col_double(),
##   ..   median_age = col_double(),
##   ..   aged_65_older = col_double(),
##   ..   aged_70_older = col_double(),
##   ..   gdp_per_capita = col_double(),
##   ..   extreme_poverty = col_double(),
##   ..   cardiovasc_death_rate = col_double(),
##   ..   diabetes_prevalence = col_double(),
##   ..   female_smokers = col_double(),
##   ..   male_smokers = col_double(),
##   ..   handwashing_facilities = col_double(),
##   ..   hospital_beds_per_thousand = col_double(),
##   ..   life_expectancy = col_double(),
##   ..   human_development_index = col_double(),
##   ..   excess_mortality = col_double()
##   .. )

Descriptive Statistics

We can also produce some descriptive statistics to better understand the data and the nature of each variable. The summary() function (as can be guessed by its name) provides a quick summary of basic descriptive statistics, such as the mean, min, max and quantiles for continuous numerical values.

# summary of variables in my data
summary(covid_data)
##    iso_code                 continent       location        
##  Length:106356      Asia         :25041   Length:106356     
##  Class :character   Europe       :25137   Class :character  
##  Mode  :character   Africa       :27249   Mode  :character  
##                     North America:13320                     
##                     South America: 6344                     
##                     Oceania      : 4326                     
##                     NA's         : 4939                     
##       date             total_cases          new_cases      new_cases_smoothed
##  Min.   :2020-01-01   Min.   :        1   Min.   :-74347   Min.   : -6223.0  
##  1st Qu.:2020-07-14   1st Qu.:     1479   1st Qu.:     2   1st Qu.:     7.9  
##  Median :2020-11-27   Median :    15408   Median :    77   Median :    96.1  
##  Mean   :2020-11-21   Mean   :  1178243   Mean   :  6146   Mean   :  6149.9  
##  3rd Qu.:2021-04-03   3rd Qu.:   163984   3rd Qu.:   845   3rd Qu.:   893.1  
##  Max.   :2021-07-31   Max.   :197872222   Max.   :905993   Max.   :826368.1  
##                       NA's   :4345        NA's   :4348     NA's   :5358      
##   total_deaths       new_deaths      new_deaths_smoothed
##  Min.   :      1   Min.   :-1918.0   Min.   : -232.143  
##  1st Qu.:     59   1st Qu.:    0.0   1st Qu.:    0.000  
##  Median :    452   Median :    2.0   Median :    1.429  
##  Mean   :  31203   Mean   :  145.6   Mean   :  131.506  
##  3rd Qu.:   4340   3rd Qu.:   18.0   3rd Qu.:   14.714  
##  Max.   :4217383   Max.   :18056.0   Max.   :14735.286  
##  NA's   :14687     NA's   :14532     NA's   :5358       
##  total_cases_per_million new_cases_per_million new_cases_smoothed_per_million
##  Min.   :     0.0        Min.   :-3162.163     Min.   :-276.825              
##  1st Qu.:   284.2        1st Qu.:    0.239     1st Qu.:   1.376              
##  Median :  2085.8        Median :    9.053     Median :  11.957              
##  Mean   : 14563.7        Mean   :   77.375     Mean   :  77.486              
##  3rd Qu.: 15638.2        3rd Qu.:   72.196     3rd Qu.:  80.325              
##  Max.   :189969.6        Max.   :18293.675     Max.   :4083.500              
##  NA's   :4886            NA's   :4889          NA's   :5894                  
##  total_deaths_per_million new_deaths_per_million
##  Min.   :   0.001         Min.   :-76.445       
##  1st Qu.:   8.835         1st Qu.:  0.000       
##  Median :  58.104         Median :  0.137       
##  Mean   : 316.492         Mean   :  1.539       
##  3rd Qu.: 354.097         3rd Qu.:  1.304       
##  Max.   :5955.172         Max.   :218.329       
##  NA's   :15215            NA's   :15060         
##  new_deaths_smoothed_per_million reproduction_rate  icu_patients  
##  Min.   :-10.921                 Min.   :-0.010    Min.   :    0  
##  1st Qu.:  0.000                 1st Qu.: 0.840    1st Qu.:   29  
##  Median :  0.165                 Median : 1.010    Median :  146  
##  Mean   :  1.390                 Mean   : 1.009    Mean   :  989  
##  3rd Qu.:  1.274                 3rd Qu.: 1.180    3rd Qu.:  652  
##  Max.   : 63.140                 Max.   : 5.800    Max.   :28889  
##  NA's   :5894                    NA's   :20614     NA's   :95339  
##  icu_patients_per_million hosp_patients    hosp_patients_per_million
##  Min.   :  0.00           Min.   :     0   Min.   :   0.00          
##  1st Qu.:  3.85           1st Qu.:   107   1st Qu.:  21.06          
##  Median : 14.06           Median :   588   Median :  73.14          
##  Mean   : 24.42           Mean   :  4406   Mean   : 160.33          
##  3rd Qu.: 38.34           3rd Qu.:  2544   3rd Qu.: 227.30          
##  Max.   :192.74           Max.   :133214   Max.   :1532.57          
##  NA's   :95339            NA's   :93051    NA's   :93051            
##  weekly_icu_admissions weekly_icu_admissions_per_million weekly_hosp_admissions
##  Min.   :   0.00       Min.   :  0.00                    Min.   :     0.00     
##  1st Qu.:   7.98       1st Qu.:  1.76                    1st Qu.:    35.01     
##  Median :  39.93       Median :  7.65                    Median :   249.68     
##  Mean   : 249.43       Mean   : 19.20                    Mean   :  3017.57     
##  3rd Qu.: 190.31       3rd Qu.: 21.19                    3rd Qu.:  1307.00     
##  Max.   :4002.46       Max.   :279.41                    Max.   :116323.00     
##  NA's   :105377        NA's   :105377                    NA's   :104475        
##  weekly_hosp_admissions_per_million   new_tests        total_tests       
##  Min.   :   0.00                    Min.   :-239172   Min.   :        0  
##  1st Qu.:   7.56                    1st Qu.:   1743   1st Qu.:   179821  
##  Median :  31.09                    Median :   6450   Median :   912353  
##  Mean   :  93.99                    Mean   :  49639   Mean   :  8628308  
##  3rd Qu.: 105.34                    3rd Qu.:  24822   3rd Qu.:  3803333  
##  Max.   :2656.91                    Max.   :3740296   Max.   :486263680  
##  NA's   :104475                     NA's   :59095     NA's   :59369      
##  total_tests_per_thousand new_tests_per_thousand new_tests_smoothed
##  Min.   :    0.00         Min.   : -6.32         Min.   :      0   
##  1st Qu.:   15.82         1st Qu.:  0.15         1st Qu.:   1840   
##  Median :   79.32         Median :  0.65         Median :   6932   
##  Mean   :  352.03         Mean   :  2.23         Mean   :  47037   
##  3rd Qu.:  329.17         3rd Qu.:  2.02         3rd Qu.:  27450   
##  Max.   :11304.99         Max.   :327.09         Max.   :3080396   
##  NA's   :59369            NA's   :59095          NA's   :51003     
##  new_tests_smoothed_per_thousand positive_rate   tests_per_case   
##  Min.   : 0.00                   Min.   :0.00    Min.   :    1.1  
##  1st Qu.: 0.16                   1st Qu.:0.02    1st Qu.:    7.8  
##  Median : 0.69                   Median :0.05    Median :   19.0  
##  Mean   : 2.13                   Mean   :0.09    Mean   :  163.2  
##  3rd Qu.: 2.08                   3rd Qu.:0.13    3rd Qu.:   58.1  
##  Max.   :90.85                   Max.   :0.93    Max.   :50000.0  
##  NA's   :51003                   NA's   :54643   NA's   :55275    
##  tests_units        total_vaccinations  people_vaccinated  
##  Length:106356      Min.   :0.000e+00   Min.   :0.000e+00  
##  Class :character   1st Qu.:1.556e+05   1st Qu.:1.272e+05  
##  Mode  :character   Median :1.187e+06   Median :8.280e+05  
##                     Mean   :4.255e+07   Mean   :2.310e+07  
##                     3rd Qu.:7.564e+06   3rd Qu.:4.791e+06  
##                     Max.   :4.136e+09   Max.   :2.206e+09  
##                     NA's   :86304       NA's   :87184      
##  people_fully_vaccinated new_vaccinations   new_vaccinations_smoothed
##  Min.   :1.000e+00       Min.   :       0   Min.   :       0         
##  1st Qu.:6.086e+04       1st Qu.:    5033   1st Qu.:     875         
##  Median :5.033e+05       Median :   29148   Median :    7232         
##  Mean   :1.308e+07       Mean   :  745869   Mean   :  356731         
##  3rd Qu.:3.109e+06       3rd Qu.:  162009   3rd Qu.:   48229         
##  Max.   :1.135e+09       Max.   :48884282   Max.   :43389920         
##  NA's   :90112           NA's   :89662      NA's   :71279            
##  total_vaccinations_per_hundred people_vaccinated_per_hundred
##  Min.   :  0.00                 Min.   :  0.00               
##  1st Qu.:  3.05                 1st Qu.:  2.47               
##  Median : 15.35                 Median : 10.91               
##  Mean   : 30.43                 Mean   : 19.24               
##  3rd Qu.: 46.65                 3rd Qu.: 30.89               
##  Max.   :232.72                 Max.   :116.73               
##  NA's   :86304                  NA's   :87184                
##  people_fully_vaccinated_per_hundred new_vaccinations_smoothed_per_million
##  Min.   :  0.00                      Min.   :     0                       
##  1st Qu.:  1.35                      1st Qu.:   417                       
##  Median :  5.92                      Median :  1895                       
##  Mean   : 13.04                      Mean   :  3398                       
##  3rd Qu.: 18.55                      3rd Qu.:  4965                       
##  Max.   :115.99                      Max.   :118759                       
##  NA's   :90112                       NA's   :71279                        
##  stringency_index   population        population_density    median_age   
##  Min.   :  0.00   Min.   :4.700e+01   Min.   :    0.137   Min.   :15.10  
##  1st Qu.: 43.52   1st Qu.:2.142e+06   1st Qu.:   36.253   1st Qu.:22.20  
##  Median : 59.72   Median :9.660e+06   Median :   83.479   Median :29.70  
##  Mean   : 57.89   Mean   :1.236e+08   Mean   :  390.400   Mean   :30.55  
##  3rd Qu.: 74.07   3rd Qu.:3.347e+07   3rd Qu.:  209.588   3rd Qu.:39.10  
##  Max.   :100.00   Max.   :7.795e+09   Max.   :20546.766   Max.   :48.20  
##  NA's   :17964    NA's   :706         NA's   :7657        NA's   :11632  
##  aged_65_older    aged_70_older    gdp_per_capita     extreme_poverty
##  Min.   : 1.144   Min.   : 0.526   Min.   :   661.2   Min.   : 0.10  
##  1st Qu.: 3.466   1st Qu.: 2.063   1st Qu.:  4466.5   1st Qu.: 0.60  
##  Median : 6.378   Median : 3.871   Median : 12951.8   Median : 2.20  
##  Mean   : 8.781   Mean   : 5.558   Mean   : 19280.6   Mean   :13.43  
##  3rd Qu.:14.312   3rd Qu.: 8.678   3rd Qu.: 27216.4   3rd Qu.:21.20  
##  Max.   :27.049   Max.   :18.493   Max.   :116935.6   Max.   :77.60  
##  NA's   :12692    NA's   :12154    NA's   :11227      NA's   :42301  
##  cardiovasc_death_rate diabetes_prevalence female_smokers   male_smokers  
##  Min.   : 79.37        Min.   : 0.990      Min.   : 0.10   Min.   : 7.70  
##  1st Qu.:168.71        1st Qu.: 5.310      1st Qu.: 1.90   1st Qu.:21.60  
##  Median :242.65        Median : 7.110      Median : 6.30   Median :31.40  
##  Mean   :258.74        Mean   : 7.955      Mean   :10.58   Mean   :32.71  
##  3rd Qu.:329.94        3rd Qu.:10.080      3rd Qu.:19.30   3rd Qu.:41.10  
##  Max.   :724.42        Max.   :30.530      Max.   :44.00   Max.   :78.10  
##  NA's   :11286         NA's   :8697        NA's   :32041   NA's   :33127  
##  handwashing_facilities hospital_beds_per_thousand life_expectancy
##  Min.   :  1.19         Min.   : 0.100             Min.   :53.28  
##  1st Qu.: 19.35         1st Qu.: 1.300             1st Qu.:67.92  
##  Median : 49.84         Median : 2.400             Median :74.62  
##  Mean   : 50.80         Mean   : 3.027             Mean   :73.24  
##  3rd Qu.: 83.24         3rd Qu.: 3.861             3rd Qu.:78.74  
##  Max.   :100.00         Max.   :13.800             Max.   :86.75  
##  NA's   :58608          NA's   :19813              NA's   :5417   
##  human_development_index excess_mortality
##  Min.   :0.394           Min.   :-95.59  
##  1st Qu.:0.602           1st Qu.:  0.43  
##  Median :0.744           Median :  7.44  
##  Mean   :0.727           Mean   : 18.13  
##  3rd Qu.:0.848           3rd Qu.: 23.94  
##  Max.   :0.957           Max.   :410.12  
##  NA's   :11174           NA's   :102683
# find extreme rows
covid_data %>% arrange(desc(total_cases))
## # A tibble: 106,356 x 60
##    iso_code continent location date       total_cases new_cases new_cases_smoot~
##    <chr>    <fct>     <chr>    <date>           <dbl>     <dbl>            <dbl>
##  1 OWID_WRL <NA>      World    2021-07-31   197872222    506087          589881.
##  2 OWID_WRL <NA>      World    2021-07-30   197366135    739604          585280.
##  3 OWID_WRL <NA>      World    2021-07-29   196626531    658221          578041.
##  4 OWID_WRL <NA>      World    2021-07-28   195968310    634006          564770.
##  5 OWID_WRL <NA>      World    2021-07-27   195334304    613517          554161.
##  6 OWID_WRL <NA>      World    2021-07-26   194720787    542415          542521.
##  7 OWID_WRL <NA>      World    2021-07-25   194178372    435316          536420.
##  8 OWID_WRL <NA>      World    2021-07-24   193743056    473879          534600.
##  9 OWID_WRL <NA>      World    2021-07-23   193269177    688931          534579.
## 10 OWID_WRL <NA>      World    2021-07-22   192580246    565323          521709.
## # ... with 106,346 more rows, and 53 more variables: total_deaths <dbl>,
## #   new_deaths <dbl>, new_deaths_smoothed <dbl>, total_cases_per_million <dbl>,
## #   new_cases_per_million <dbl>, new_cases_smoothed_per_million <dbl>,
## #   total_deaths_per_million <dbl>, new_deaths_per_million <dbl>,
## #   new_deaths_smoothed_per_million <dbl>, reproduction_rate <dbl>,
## #   icu_patients <dbl>, icu_patients_per_million <dbl>, hosp_patients <dbl>,
## #   hosp_patients_per_million <dbl>, weekly_icu_admissions <dbl>, ...

What are the metadata columns that describe our observations?

continent 
location  
date

Why do we have observations with the continent as NA?

# check which locations have continent as NA
covid_data %>% filter(is.na(continent)) %>% count(location)
## # A tibble: 9 x 2
##   location           n
##   <chr>          <int>
## 1 Africa           535
## 2 Asia             557
## 3 Europe           556
## 4 European Union   556
## 5 International    541
## 6 North America    557
## 7 Oceania          554
## 8 South America    526
## 9 World            557
Some rows contain summarised data for entire continents or the whole World; we’ll need to remove those.
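A minimal base-R sketch of that clean-up, using a toy data frame in place of covid_data (the real removal is done later with dplyr's filter()):

```r
# rows with continent == NA are aggregates (World, continents, European Union, ...)
toy <- data.frame(continent = c("Asia", NA, "Europe", NA),
                  location  = c("Afghanistan", "World", "France", "Asia"))
countries_only <- toy[!is.na(toy$continent), ]  # keep actual countries only
countries_only$location  # "Afghanistan" "France"
```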

We can see that most values in our data are very small (compare the median and the mean of the total_cases and total_deaths columns — the distributions are heavily right-skewed). To confirm that, let’s plot a histogram of all the confirmed cases:

ggplot(covid_data, aes(x=total_cases)) +
  geom_histogram(fill="lightskyblue") +
  theme_bw(def_text_size)

The data evolves over days (a time series), so there’s no point treating it as a random population sample.

Time-series plot

Let’s look at confirmed cases and total deaths data for the 10 most affected countries (to date). To find these countries we need to wrangle our data a little bit using the following steps:

  1. First we remove all observations for combined continents with filter(!is.na(continent))
  2. Then we group it by location with group_by()
  3. Then we sort it within each location by date (from latest to earliest) with arrange(desc(date))
  4. We select just the most recent data point from each location with slice(1) and remove grouping with ungroup()
  5. Next we arrange it in descending order of total deaths and select the top 10 observations (one for each location)
  6. Finally, we subset our original data to contain just those countries with inner_join()

Optional step:

  1. We can recode the location variable as a factor and order its levels so the countries appear in the legend ordered by the number of cases, using mutate() and factor()
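The optional step relies on the fact that ggplot2 orders legend entries by factor levels. A tiny base-R illustration (the country names and their order here are hypothetical):

```r
# order the factor levels explicitly so a ggplot2 legend would follow them
most_affected <- c("United States", "Brazil", "India")  # hypothetical order
x <- factor(c("India", "United States", "Brazil"), levels = most_affected)
levels(x)  # "United States" "Brazil" "India"
```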

Then we can look at the data as a table and make a plot with the number of cases in the y-axis and date in the x-axis.

# find the 10 most affected countries (to date)
latest_data <- covid_data %>% filter(!is.na(continent)) %>% 
  group_by(location) %>% arrange(desc(date)) %>% slice(1) %>% ungroup() 
most_affected_countries <- latest_data  %>%  
  arrange(desc(total_deaths)) %>% slice(1:10) %>% 
  select(location)

# subset just the data from the 10 most affected countries and order them from most to least affected
most_affected_data <- covid_data %>% 
  inner_join(most_affected_countries) %>% 
  mutate(Country=factor(location, levels = most_affected_countries$location))

# create a line plot of total cases
ggplot(most_affected_data, aes(x=date, y=total_cases, colour=Country)) +
  geom_line(size=0.75) + scale_y_continuous(labels=comma) + 
  scale_color_paletteer_d("rcartocolor::Bold") +
  labs(color="Country", y = "Total COVID-19 cases") +
  theme_bw(def_text_size)

It’s a bit hard to figure out how the pandemic evolved because the numbers in the US, Brazil and India are an order of magnitude larger than in the rest (which are very close to each other). How can we make it more visible (and also improve how the dates appear on the x-axis)?

# better formatting of date axis, log scale 
plot <- ggplot(most_affected_data, aes(x=date, y=total_cases, colour=Country)) +
  geom_line(size=0.75) + scale_y_log10(labels=comma) + 
  scale_x_date(NULL,
               breaks = breaks_width("2 months"), 
               labels = label_date_short()) + 
  scale_color_paletteer_d("rcartocolor::Bold") +
  labs(color="Country", y = "Total COVID-19 cases") +
  theme_bw(def_text_size)
# show an interactive plot
ggplotly(plot)

Why did we get a warning message, and why don’t all the curves start at the left edge of the x-axis? How can we solve this? What can we infer from the graph (exponential increase)?

What happens when we take the log of 0? Can we remove those 0s with the `filter()` function (or add a very small number to them)?
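A quick console check of both options (a sketch with made-up case counts, not part of the workshop code):

```r
log10(0)              # -Inf: zeros cannot be drawn on a log scale
cases <- c(0, 1, 77, 905993)
cases[cases > 0]      # option 1: drop the zeros (what filter() would do)
log10(cases + 1)      # option 2: shift by a small constant instead
```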
We can see a very similar trend for most countries, and while the curve flattened substantially in April last year, the numbers are still rising. It is also evident that Europe was hit by a second wave around October last year, and India in April this year.

Total deaths

Let’s have a look at the total deaths in these countries (and get rid of the minor grid lines to make Frank happy)

# create a line plot of total deaths
ggplot(most_affected_data, aes(x=date, y=total_deaths, colour=Country)) +
  geom_line(size=0.75) + scale_y_continuous(labels=comma) + 
  scale_x_date(NULL,
               breaks = breaks_width("2 months"), 
               labels = label_date_short()) + 
  scale_color_paletteer_d("rcartocolor::Bold") +
  labs(color="Country", y = "Total deaths") +
  theme_bw(def_text_size) +  
  theme(panel.grid.minor = element_blank()) # remove minor grid lines

Vaccination rates

Let’s have a look at the number of vaccinated people.

# vaccination rates
ggplot(most_affected_data, aes(x=date, y=people_vaccinated, colour=Country)) +
  geom_line(size=0.75) + scale_y_continuous(labels=comma) + 
  scale_color_paletteer_d("rcartocolor::Bold") +
  scale_x_date(NULL,
               breaks = breaks_width("2 months"), 
               labels = label_date_short()) + 
  labs(color="Country") +
  theme_bw(def_text_size) + 
  theme(panel.grid.minor = element_blank())

The lines are “broken”, meaning they are not continuous because some of the data is missing.
Let’s visualise some of the variables in our data and assess “missingness”.

# visualise missingness
vis_dat(covid_data %>% filter(date>dmy("01-01-2021")) %>% 
          select(continent, location, total_cases, total_deaths, 
                 hosp_patients, people_vaccinated, people_fully_vaccinated))

# find which countries have the most observations (least missing data)
covid_data %>% filter(!is.na(continent), !is.na(people_vaccinated)) %>% # group_by(location) %>% 
  count(location) %>% arrange(desc(n)) %>% print(n=30)
## # A tibble: 215 x 2
##    location           n
##    <chr>          <int>
##  1 Norway           240
##  2 Canada           230
##  3 Israel           225
##  4 Denmark          224
##  5 Latvia           221
##  6 Liechtenstein    220
##  7 Switzerland      220
##  8 Czechia          218
##  9 Austria          217
## 10 Chile            217
## 11 Estonia          217
## 12 Italy            217
## 13 Lithuania        217
## 14 Slovenia         217
## 15 Bahrain          216
## 16 Germany          216
## 17 France           215
## 18 Belgium          214
## 19 Romania          210
## 20 Slovakia         208
## 21 United States    207
## 22 United Kingdom   206
## 23 Portugal         202
## 24 Greece           197
## 25 Bulgaria         196
## 26 Malta            194
## 27 Mexico           194
## 28 India            190
## 29 Argentina        189
## 30 Poland           182
## # ... with 185 more rows
covid_data %>% filter(!is.na(continent), !is.na(hosp_patients)) %>% # group_by(location) %>% 
  count(location) %>% arrange(desc(n)) %>% print(n=30)
## # A tibble: 29 x 2
##    location           n
##    <chr>          <int>
##  1 France           525
##  2 Estonia          519
##  3 Italy            518
##  4 Israel           516
##  5 Portugal         516
##  6 Sweden           515
##  7 Netherlands      513
##  8 Czechia          512
##  9 Canada           509
## 10 Hungary          508
## 11 Cyprus           504
## 12 Ireland          503
## 13 Slovenia         503
## 14 Belgium          498
## 15 Luxembourg       497
## 16 United Kingdom   490
## 17 Latvia           482
## 18 Austria          481
## 19 Bulgaria         476
## 20 Poland           460
## 21 Croatia          453
## 22 Slovakia         452
## 23 Norway           450
## 24 Denmark          415
## 25 United States    375
## 26 Iceland          351
## 27 Finland          315
## 28 Spain            233
## 29 Lithuania        216

Hospitalisation and Vaccination rates

Now we can focus on a subset of countries with more complete vaccination and hospitalisation data, so that we can compare them.

countries_subset <- c("Italy", "United States", "Israel", "United Kingdom",  "France", "Canada", "Czechia")
# subset our original data to these countries
hosp_data <- covid_data %>% filter(location %in% countries_subset)
# define a new colour palette for these countries
col_pal <- "ggsci::category10_d3"

Let’s look at hospitalisation rates first.

ggplot(hosp_data, aes(x=date, y = hosp_patients, colour=location)) +
  geom_line(size=0.75) + scale_y_continuous(labels=comma) + 
  scale_color_paletteer_d(col_pal) +
  scale_x_date(name = NULL,
               breaks = breaks_width("2 months"), 
               labels = label_date_short()) + 
  labs(color="Country", y = "Hospitalised patients") +
  theme_bw(def_text_size) + 
  theme(panel.grid.minor = element_blank())

Can you identify the “waves” in each country?
It’s hard to see the details in the countries with lower numbers of hospitalised patients. How can we improve the visualisation?

Let’s look at hospitalisation rates proportional to the population size!

# hospitalised patients per million population
p1 <- ggplot(hosp_data, 
       aes(x=date, y = hosp_patients_per_million, colour=location)) +
  geom_line(size=0.75) + 
  scale_y_continuous(labels=comma) + 
  scale_color_paletteer_d(col_pal) +
  scale_x_date(name = NULL,
               breaks = breaks_width("2 months"), 
               labels = label_date_short()) + 
  labs(color="Country", y = "Hospitalised patients (per million)") +
  theme_bw(def_text_size) + 
  theme(panel.grid.minor = element_blank())
p1

Now let’s try to compare this to vaccination rates

# total vaccination per population
p2 <- ggplot(hosp_data, 
             aes(x=date, y = people_fully_vaccinated_per_hundred, colour=location)) +
  geom_line(size=0.75, linetype="dashed") + 
  guides(color = guide_legend(override.aes = list(linetype="solid") ) ) +
  scale_y_continuous(labels=comma) + 
  scale_color_paletteer_d(col_pal) +
  scale_x_date(name = NULL,
               breaks = breaks_width("2 months"), 
               labels = label_date_short()) + 
  labs(color="Country", y = "Fully vaccinated (per hundred)") +
  theme_bw(def_text_size) + 
  theme(panel.grid.minor = element_blank())
p2

What will be the best way to compare these values?

(p1 + guides(color="none")) + p2 + plot_layout(guides = 'collect') # show graphs side by side

Maybe like this:

(p1 + guides(color="none")) / (p2 + theme(legend.position = "bottom")) # show graphs on top of each other

Any suggestions?

There’s a lot of empty “real estate” in the vaccination graph; maybe we could trim off 2020?

# limit the x-axis to 2021 onwards
p3 <- ggplot(hosp_data, 
             aes(x=date, y = people_fully_vaccinated_per_hundred, colour=location)) +
  geom_line(size=0.75, linetype="dashed") +
  guides(color = guide_legend(override.aes = list(linetype="solid") ) ) +
  scale_y_continuous(labels=comma) + 
  scale_color_paletteer_d(col_pal) +
  scale_x_date(name = NULL,
               limits = c(dmy("01-01-2021"), NA),
               breaks = breaks_width("2 months"), 
               labels = label_date_short()) + 
  labs(color="Country", y = "Fully vaccinated (per hundred)") +
  theme_bw(def_text_size) + 
  theme(panel.grid.minor = element_blank())
(p1 + guides(color="none")) + p3 + plot_layout(guides = 'collect', widths = c(2, 1)) # maybe like this?

Let’s try to present them on the same graph (note the trick with the secondary y-axis).

# show on the same graph
p4 <- ggplot(hosp_data, 
       aes(x=date, colour=location)) +
  geom_line(aes(y = hosp_patients_per_million), size=0.75) + 
  geom_line(aes(y = people_fully_vaccinated_per_hundred*10), size=0.75, linetype="dashed") + 
  scale_y_continuous(labels=comma, name = "Hospitalised patients per million (solid)",
                     # Add a second axis and specify its features
                     sec.axis = sec_axis(trans=~./10,  name="Fully vaccinated per hundred (dashed)")) + 
  scale_color_paletteer_d(col_pal) +
  scale_x_date(NULL,
               breaks = breaks_width("2 months"), 
               labels = label_date_short()) + 
  labs(color="Country") +
  theme_bw(def_text_size) + 
  theme(panel.grid.minor = element_blank()) 
p4

Probably best to present them on the same graph (note the trick with the secondary y-axis), but for each country separately (done with the facet_wrap() function).

# show on the same graph, but separate each country
p4 + 
  scale_x_date(NULL,
               breaks = breaks_width("4 months"), 
               labels = label_date_short()) +
  facet_wrap(~location)

Anything to worry about?
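One thing to watch with the secondary-axis trick: ggplot2 really plots everything on the primary scale, so the multiplier applied in the aesthetic (×10 above) and the `sec.axis` transform (÷10) must be exact inverses, or the dashed lines will be mislabeled. A minimal self-contained sketch with made-up numbers:

```r
library(ggplot2)

# toy data (hypothetical values, just to illustrate the rescaling)
df <- data.frame(
  date = as.Date("2021-01-01") + 0:9,
  hosp_per_million = seq(400, 150, length.out = 10),
  vacc_per_hundred = seq(2, 45, length.out = 10)
)

scale_factor <- 10  # 1 unit on the secondary axis == 10 units on the primary

p <- ggplot(df, aes(x = date)) +
  geom_line(aes(y = hosp_per_million)) +
  geom_line(aes(y = vacc_per_hundred * scale_factor), linetype = "dashed") +
  scale_y_continuous(
    name = "Hospitalised patients per million (solid)",
    # the transform must invert the multiplication applied in aes() above
    sec.axis = sec_axis(~ . / scale_factor,
                        name = "Fully vaccinated per hundred (dashed)")
  )
p
```

Keeping the factor in a single variable (`scale_factor`) makes it impossible for the two halves of the trick to drift out of sync when you tweak the plot.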

Save the plot to a folder.

# create output folder
dir.create("./output", showWarnings = FALSE)
# save the plot to pdf file
ggsave("output/hospit_vacc_rates_facet_country.pdf", width=14, height = 8)

Questions

  1. What other variables could we analyse?
  2. Are any of the variables correlated?
  3. What should we take into account that might bias the results or hide the true status of the pandemic?

  1. Mortalities (Case Fatality Rate)?
  2. Suggestions? (cases per population density, vaccination rates by country income, deaths by number of beds per capita, etc.)
  3. The level of reporting in each country...
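As a starting point for the first question, the naive case fatality rate is just total deaths divided by total cases. A sketch with made-up numbers (the column names match `covid_data`, but in the real analysis you would `mutate()` the latest row per `location`):

```r
library(dplyr)

# toy snapshot (hypothetical numbers, one row per country)
snapshot <- tibble(
  location     = c("A", "B", "C"),
  total_cases  = c(100000, 250000, 50000),
  total_deaths = c(2000, 3000, 1500)
)

# CFR as a percentage of confirmed cases
cfr_tbl <- snapshot %>%
  mutate(cfr = total_deaths / total_cases * 100)
cfr_tbl
```

Note that this naive CFR inherits all the reporting biases raised in question 3: it depends heavily on how many cases each country actually detects and reports.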

Additional Resources

  • Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE) data repository and website
  • EU 14-days COVID-19 data for download in CSV/Excel format link
  • Be awesome in ggplot2: A Practical Guide to be Highly Effective – R software and data visualization (link)
  • The R graph gallery - From the creator of “Data to Viz” - a comprehensive gallery of R charts, with reproducible code examples (link)
  • COVID-19 vaccination data in Our World in Data site
  • My very own COVID-19 dashboard (created in R, needs updating)

Contact

Please contact me at i.bar@griffith.edu.au for any questions or comments.

References

Mathieu E, Ritchie H, Ortiz-Ospina E, et al. (2021) A global database of COVID-19 vaccinations. Nat Hum Behav 5:947–953. doi: 10.1038/s41562-021-01122-8